Conference Proceedings
Learning Robust Representations of Text
Y Li, T Cohn, T Baldwin
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), Short Papers | Association for Computational Linguistics | Published: 2016
DOI: 10.18653/v1/d16-1207
Abstract
Deep neural networks have achieved remarkable results across many language processing tasks; however, these methods are highly sensitive to noise and adversarial attacks. We present a regularization-based method for limiting a network's sensitivity to its inputs, inspired by ideas from computer vision, thus learning models that are more robust. Empirical evaluation over a range of sentiment datasets with a convolutional neural network shows that, compared to a baseline model and the dropout method, our method achieves superior performance over noisy inputs and out-of-domain data.
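The abstract describes penalizing the network's sensitivity to its inputs. A minimal NumPy sketch of that idea, for a toy logistic-regression model, is shown below: the training objective adds a penalty on the squared norm of the loss gradient with respect to the input. This is an illustrative stand-in for the general technique, not the authors' implementation; the model, the penalty weight `lam`, and the function names are assumptions for this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def robust_loss(w, x, y, lam=0.1):
    """Logistic loss plus an input-sensitivity penalty.

    The extra term lam * ||dL/dx||^2 discourages the loss from changing
    under small perturbations of the input x -- a simple instance of
    input-gradient regularization (a hedged illustration, not the
    paper's exact formulation or code).
    """
    margin = y * np.dot(w, x)        # y in {-1, +1}
    p = sigmoid(margin)
    loss = -np.log(p)                # standard logistic loss
    # analytic input gradient of the logistic loss: dL/dx = -(1 - p) * y * w
    grad_x = -(1.0 - p) * y * w
    penalty = lam * np.dot(grad_x, grad_x)
    return loss + penalty
```

With `lam=0` this reduces to the plain logistic loss; a positive `lam` adds a cost whenever the loss surface is steep in input space, pushing the learner toward flatter, more noise-tolerant solutions.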
Grants
Awarded by Australian Research Council